Part 3: time to mix
Key idea:
leave one data point out of the data set
re-fit the model on the remaining n − 1 observations
predict the value for the held-out data point and compare it with the observed value
repeat this n times, once for each data point
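The steps above can be sketched in plain R; note this uses the built-in mtcars data and a simple lm() fit as stand-ins, not the workshop's actual model:

```r
# Minimal leave-one-out sketch with base R (mtcars and mpg ~ wt are placeholders)
n <- nrow(mtcars)
loo_errors <- sapply(seq_len(n), function(i) {
  fit  <- lm(mpg ~ wt, data = mtcars[-i, ])    # leave point i out, re-fit the model
  pred <- predict(fit, newdata = mtcars[i, ])  # predict the held-out point
  mtcars$mpg[i] - pred                         # compare with the observed value
})
mean(loo_errors^2)  # summarise the prediction error over all n re-fits
```

With Bayesian models, each of these n re-fits would mean a full MCMC run, which is why the shortcut below matters.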
The loo package
Leave-one-out as described above is computationally almost impossible!
loo uses a “shortcut making use of the mathematics of Bayesian inference” [1]
Result: elpd (“expected log predictive density”): a higher elpd implies better model fit without being sensitive to over-fitting!
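For a model already fitted with brms, the elpd comes straight out of loo(); a sketch, where fit is assumed to be a brmsfit object estimated earlier with brm():

```r
library(brms)

# 'fit' is assumed to be a model estimated earlier with brm()
loo_fit <- loo(fit)
print(loo_fit)  # reports elpd_loo; higher values indicate better expected predictive fit
```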
loo code
WritingData.RData: an experimental study on writing instructions
2 conditions:
Open WritingData.RData
Estimate 3 models with SecondVersion as dependent variable:
1. FirstVersion_GM + random intercept of Class ((1|Class))
2. FirstVersion_GM + random slope of FirstVersion_GM ((1 + FirstVersion_GM | Class))
3. additionally Experimental_condition
Compare the models on their fit
What do we learn?
Make a summary of the best fitting model
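Assuming the models are fitted with brms and the data frame inside WritingData.RData is called WritingData (a placeholder name; adjust to whatever load() actually creates), the exercise could look like this, with priors and sampler settings left at their defaults:

```r
library(brms)

load("WritingData.RData")  # assumed to create a data frame 'WritingData'

# Model 1: centred covariate + random intercept of Class
M1 <- brm(SecondVersion ~ FirstVersion_GM + (1 | Class),
          data = WritingData)

# Model 2: add a random slope for FirstVersion_GM
M2 <- brm(SecondVersion ~ FirstVersion_GM + (1 + FirstVersion_GM | Class),
          data = WritingData)

# Model 3: add the experimental condition
M3 <- brm(SecondVersion ~ FirstVersion_GM + Experimental_condition +
            (1 + FirstVersion_GM | Class),
          data = WritingData)

# Compare the models on their fit (higher elpd = better; the best model is listed first)
loo_compare(loo(M1), loo(M2), loo(M3))

# Summary of the best fitting model (here assumed, for illustration, to be M3)
summary(M3)
```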
Something to worry about!
Essentially: sampling of parameter estimate values went wrong
Fixes:
control = list(adapt_delta = 0.9) often works
Do not hesitate to contact me!
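The adapt_delta fix is passed to brm() via its control argument; a sketch, where the model formula and the data frame name WritingData are placeholders:

```r
library(brms)

# Raising adapt_delta makes the sampler take smaller, more careful steps,
# which usually removes the warning at the cost of somewhat slower sampling.
M3 <- brm(SecondVersion ~ FirstVersion_GM + Experimental_condition +
            (1 + FirstVersion_GM | Class),
          data = WritingData,
          control = list(adapt_delta = 0.9))
```

If the warning persists, adapt_delta can be raised further (e.g. 0.95 or 0.99).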